perm filename IDEAS[E87,JMC] blob
sn#844249 filedate 1987-08-07 generic text, type C, neo UTF8
ideas[e87,jmc] dependence of concepts on situations
dependence of concepts on theories
dependence of concepts on state of knowledge
1. Discriminating the chairs that actually exist
in the house doesn't require a precise concept. In practice
chairs make up a natural kind.
2. Counterfactuals depend on theories.
3. Causality statements depend on the state of knowledge.
Kinds may exist in the physical or social world and may have properties
that are independent of a particular person's experience. A person
encounters a particular kind in a limited context. The optimal strategy
is to create a label for a new kind and tentatively attach to the label
the experience and any generalizations that may seem reasonable.
July 21
New ideas may be introduced by extending the language in ways that
are monotonically conservative but non-monotonically non-conservative.
Thus there are no new theorems, but the simplest models change.
For example, if we introduce atoms by saying that there might
be such things, this has no monotonic force. However, the simplest
model of observed phenomena combined with the sentences about atoms
may now be that there actually are atoms.
There are two examples I'd like to be able to do - both scientific.
1. Introducing the concept of atom and its support by means of the
law of combining proportions.
2. Adding Charles's law about the dependence of volume on temperature
to a system with Boyle's law about the dependence of volume on pressure.
Has anyone investigated conservative extensions to languages other
than definitions?
Scientific induction, i.e. finding the simplest explanation of
some phenomenon, is non-monotonic reasoning. Give some examples
of doing it by circumscription.
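One way to see scientific induction as circumscription is to minimize an abnormality predicate over the models of what is known: the simplest explanation is the one holding in all ab-minimal models. The following sketch does this by brute force on the birds example; the three-proposition encoding and the exhaustive search are illustrative assumptions, not a general circumscription algorithm.

```python
# Circumscription as model minimization, sketched on the birds example.
# Propositions about tweety: bird, ab (abnormal), flies.
from itertools import product

def satisfies(bird, ab, flies):
    # Axioms: bird(tweety) holds, and bird(X) & ~ab(X) -> flies(X).
    implication = (not (bird and not ab)) or flies
    return bird and implication

# Enumerate all models of the axioms.
models = [(b, a, f) for b, a, f in product([False, True], repeat=3)
          if satisfies(b, a, f)]

# Circumscribe ab: keep only the models where ab is minimal.
min_ab = min(m[1] for m in models)          # False < True in Python
circumscribed = [m for m in models if m[1] == min_ab]

# flies(tweety) holds in every ab-minimal model: the non-monotonic
# conclusion that Tweety flies, absent evidence of abnormality.
print(all(f for (_, _, f) in circumscribed))
```

Adding an axiom such as ab(tweety) would change which models are minimal and retract the conclusion, which is exactly the non-monotonic behavior the notes describe.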
July 25
Generalized turnpike theorem
Suppose humanity had decided that the most sustainable future was
a population of one million living in a garden continent and wanted
to be able to sustain this population living its ecological life
for as long as possible. Given this goal, the way to begin is with
a breakneck economic expansion leading to expansion into space,
the occupation of all galaxies within reach, their conversion into
black hole energy machines, all so as to sustain the eventually
produced ecological idyll as long as possible. Starting the idyll
right away would make it last perhaps 10↑9 years with good luck
or very much less with bad luck, whereas conquering the universe
first might make the idyll last (say) 10↑60 years.
Indeed whatever the very long term goal, the initial steps are the same ---
conquering the universe. There are two analogies.
1. The turnpike theorems in economics. The simplest assume a linear economy.
The resources available in the initial period determine what can
be produced and what is consumed in a year, and this gives the
situation a year later. Such an economy is characterized by a matrix
that converts the vector of goods available at a given time into
the vector available a year later. This matrix has a largest eigenvalue
with an associated eigenvector. The turnpike theorem for this system
states that if one wants to maximize something after many years, the
optimal strategy is to produce a bill of goods proportional to
this eigenvector, then expand the economy at the maximum rate given
by this largest eigenvalue until one is close to the time specified
in the goal and then to modify the strategy to maximize whatever it is.
The metaphor is that to go anywhere, one first drives to the nearest turnpike,
then along the turnpike till one gets close to the destination and
then to the destination.
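The convergence to the turnpike can be seen numerically: repeated production drives any initial bill of goods toward the direction of the dominant eigenvector. The 2x2 technology matrix below is a made-up example (an assumption, not from the notes); the computation is plain power iteration.

```python
# Power iteration on a toy linear economy: A converts this year's
# vector of goods into next year's.
def step(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v)))
            for i in range(len(A))]

def turnpike_direction(A, v, iters=200):
    # Repeatedly produce and renormalize; the direction converges to
    # the eigenvector of the largest eigenvalue (the "turnpike").
    for _ in range(iters):
        v = step(A, v)
        s = sum(v)
        v = [x / s for x in v]      # normalize so components sum to 1
    return v

A = [[0.9, 0.4],                    # hypothetical technology matrix
     [0.3, 0.8]]                    # (eigenvalues 1.2 and 0.5)

# Two very different initial endowments end up on the same ray.
v1 = turnpike_direction(A, [1.0, 0.0])
v2 = turnpike_direction(A, [0.0, 1.0])
print(all(abs(a - b) < 1e-9 for a, b in zip(v1, v2)))
```

Here the dominant eigenvalue is 1.2 with eigenvector proportional to (4, 3), so both runs converge to (4/7, 3/7): whatever the starting goods, balanced growth soon proceeds along the same ray, which is the content of the turnpike theorem in this linear setting.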
2. The best strategy in giveaway chess or checkers is to play ordinary
chess or checkers until one accumulates an overwhelming advantage and
reduces the opponent to a single piece. Then one can feed one's pieces
to the opponent one at a time, and he can't prevent it.
The generalization is that in many systems where one may try to achieve
a goal after a long time, there is a natural concept of power. From a
position of power, any other position is most easily attained.
Is there a concept of powerful position in mechanical systems subject
to external forces?
Aug 1
Analogous to changing programs without reading them is changing
beliefs without knowing precisely how they are expressed, using a
"belief modification language".
Aug 4
In class today: invariant(color,move) requires quite a lot of
reification to be acceptable.
Timothy's "Go to the park, pick up broken glass" shows that quite a
lot not presently observable goes on in a baby's mind.
Aug 7
Maybe the actual translation of human speech is not into a first order
language. The first order formula is holds(p,c), where p is the
sentence spoken, and c is a context.
Aug 8
One of the students in the AI WICS course, maybe Jeff Cohen, grumbled
that you had to have all the reasons for exceptions listed. I denied
this and gave the birds problem as an example, supposing it formalized
in Prolog. Suppose we want to find something that flies, so we
give Prolog the query flies(X). Prolog finds
flies(X) ← bird(X), ¬ab2(X).
It finds bird(tweety) in its database and must establish ¬ab2(tweety),
which it proposes to do by negation as failure, and so it establishes
a goal ab2(tweety). Suppose the database contains
ab2(X) ← ostrich(X).
and
ab2(X) ← penguin(X).
It then is led successively to try ostrich(tweety) and penguin(tweety)
which fail if these are not in the database.
Cohen, if that's who it was, objected that human reasoning wouldn't
ever think of asking ostrich(tweety) or penguin(tweety), and I had
to admit he was right. This suggests the following. Use a different
logic programming language than Prolog; maybe MRS, in which some
rules can be forward chaining, would be right. We put in the forward chaining rules
ostrich(X) → ab2(X).
and
penguin(X) → ab2(X).
and omit the corresponding backward chaining rules. Then if Tweety is
not an ostrich or penguin, the backward chaining fails on ab2(tweety),
i.e. it is not in the database, and we conclude that Tweety can fly.
If Tweety is an ostrich, then when ostrich(tweety) got in the database,
ab2(tweety) would also get in the database on account of the forward
chaining rule.
This behavior corresponds to the human behavior that Cohen
referred to.
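The combination can be sketched as a tiny rule engine: forward-chaining rules fire when a fact is asserted, and the flies query uses negation as failure without ever generating ostrich or penguin subgoals. The engine below is an illustrative assumption in Python, not MRS or Prolog itself; the predicate names follow the notes.

```python
# Forward chaining for ab2, backward chaining with negation as failure
# for flies, as in the birds example above.
class KB:
    def __init__(self):
        self.facts = set()
        # forward rules: when the premise is asserted, add the conclusion
        self.forward = [("ostrich", "ab2"), ("penguin", "ab2")]

    def assert_fact(self, pred, x):
        self.facts.add((pred, x))
        for premise, conclusion in self.forward:
            if premise == pred:
                self.facts.add((conclusion, x))   # forward chaining

    def flies(self, x):
        # backward rule: flies(X) <- bird(X), ~ab2(X)
        # negation as failure: ab2(x) fails iff it is simply absent,
        # so no ostrich or penguin subgoal is ever tried
        return ("bird", x) in self.facts and ("ab2", x) not in self.facts

kb = KB()
kb.assert_fact("bird", "tweety")
print(kb.flies("tweety"))        # True: ab2(tweety) absent

kb.assert_fact("ostrich", "tweety")
print(kb.flies("tweety"))        # False: forward chaining added ab2(tweety)
```

The query never mentions ostriches or penguins; they enter only when an ostrich or penguin fact is asserted, matching the human behavior Cohen pointed out.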